This is the code used to identify and cut out each of the lifts from the IMU's data. This process was repeated for each subject, and the resulting data is stored as separate R dataframe files for further analysis. The process is described below.
Let's load all functions from the previous chapter. This is a hassle, since they are stored in Quarto markdown files and R only accepts plain R code.
````r
# Clean workspace
rm(list = ls())

library(tidyr)
library(dplyr)
library(plotly)
library(ggplot2)
library(knitr)
library(lubridate)

# Most dependencies are loaded by loading datafiltering.qmd.
# To load only the chunks containing functions, we need parsermd.
library(parsermd)

toload <- c("load_data", "load_plots", "filter_data", "separate_lifts",
            "visualise_seperate_lifts", "loadXsenseData2")
rmd <- parse_rmd("datafiltering.qmd")
for (i in seq_along(toload)) {
  setup_chunk <- rmd_select(rmd, toload[i]) |> as_document()
  # Drop the fence lines so only plain R code remains
  setup_chunk <- setup_chunk[-grep("```", setup_chunk)]
  eval(parse(text = setup_chunk))
}
rm(rmd, i, setup_chunk, toload)
````
I've collected metadata about all recorded lifts in liftinfo.csv. This can be used to easily load the lifts:
Warning: Specifying width/height in layout() is now deprecated.
Please specify in ggplotly() or plot_ly()
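The chunk that loads the lifts is folded in the rendered page, so here is only a rough sketch of the idea: read the metadata and loop over the recorded lifts. The column names (`subject`, `lift`, `file`) are assumptions, not the actual layout of liftinfo.csv, and a toy metadata file is written first so the example is self-contained.

```r
# Sketch only: the real liftinfo.csv and its columns are assumptions here.
# A toy metadata file is created so the example runs on its own.
liftinfo_path <- file.path(tempdir(), "liftinfo.csv")
write.csv(
  data.frame(subject = c("female1_2105", "female1_2105"),
             lift    = c(1, 2),
             file    = c("female1_21051.csv", "female1_21052.csv")),
  liftinfo_path, row.names = FALSE
)

# Read the metadata and walk over the recorded lifts
liftinfo <- read.csv(liftinfo_path)
for (row in seq_len(nrow(liftinfo))) {
  cat("subject:", liftinfo$subject[row], "lift:", liftinfo$lift[row], "\n")
  # df <- read.csv(liftinfo$file[row])   # load the cut-out lift data
}
```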
This was used to loop through all the lifts:
```r
#! A mistake of mine: this filters the analysed data (with the absolute
#! values) instead of the raw IMU data.
cat("lift ", i)
proefpersoon <- "female1_2105"
cat("start time: ")
start_time <- readline()
cat("end time: ")
end_time <- readline()

df <- data1[[i]]
time_column <- "time"
output_filename <- paste0(proefpersoon, i, ".csv")

# Filter the dataframe to the selected time window
filtered_df <- df[df[[time_column]] >= start_time & df[[time_column]] <= end_time, ]

# Save the filtered dataframe to a CSV file
write.csv(filtered_df, output_filename, row.names = FALSE)
i <- i + 1
```
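The introduction of this chapter mentions that the cut-out lifts end up as separate R dataframe files. A minimal sketch of that conversion step follows; the file names are illustrative, and a toy CSV is generated so the code runs on its own.

```r
# Sketch: convert a cut-out lift CSV into an R dataframe file (.rds).
# The file name is hypothetical; a toy CSV is written so the example runs.
csv_path <- file.path(tempdir(), "female1_21051.csv")
write.csv(data.frame(time = 1:5, acc_x = rnorm(5)), csv_path, row.names = FALSE)

lift_df <- read.csv(csv_path)
rds_path <- sub("\\.csv$", ".rds", csv_path)
saveRDS(lift_df, rds_path)

# Later chapters can reload the lift with readRDS();
# the round trip preserves the dataframe exactly.
identical(readRDS(rds_path), lift_df)
```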
The next chapter will describe the analysis of the lifts.